Text content created by humans or language models is often stolen or misused by adversaries. Tracing text provenance can help claim ownership of text content or identify malicious users who distribute misleading content, such as machine-generated fake news. There have been attempts to achieve this, mainly based on watermarking techniques. Specifically, traditional text watermarking methods embed watermarks by slightly altering text formatting, such as line spacing and fonts, which, however, is fragile to cross-media transmission such as OCR. With this in mind, natural language watermarking methods represent watermarks by replacing words in the original sentence with synonyms from handcrafted lexical resources (e.g., WordNet), but they do not consider the impact of the substitution on the overall meaning of the sentence. Recently, a transformer-based network was proposed to embed watermarks by modifying inconspicuous words (e.g., function words), which also damages the logical and semantic coherence of the sentence. Moreover, a well-trained network fails on other, different types of text content. To address the above limitations, we propose a natural language watermarking scheme based on context-aware lexical substitution (LS). Specifically, we use BERT to suggest LS candidates by inferring the semantic relatedness between the candidates and the original sentence. On top of this, a selection strategy in terms of synchronicity and substitutability is further designed to test whether a word is exactly suitable for carrying the watermark signal. Extensive experiments demonstrate that, under both objective and subjective metrics, our watermarking scheme well preserves the semantic integrity of the original sentences and has better transferability than existing methods. Besides, the proposed LS approach outperforms the state-of-the-art approach on the Stanford Word Substitution Benchmark.
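As a rough sketch of the candidate-generation step only (not the authors' released code), a masked-language-model pass with a BERT checkpoint can rank in-context substitutes for a target word; the model choice (`bert-base-uncased`), the top-k value, and the example sentence are illustrative assumptions, and the paper's synchronicity/substitutability selection is not reproduced here.

```python
# Minimal sketch: in-context lexical-substitution candidates from a masked LM.
import torch
from transformers import BertForMaskedLM, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased").eval()

def substitution_candidates(words, target_index, top_k=10):
    """Rank replacements for one word using the masked-LM distribution."""
    masked = list(words)
    masked[target_index] = tokenizer.mask_token
    inputs = tokenizer(" ".join(masked), return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    # Position of the single [MASK] token in the encoded input.
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero().item()
    probs = logits[0, mask_pos].softmax(dim=-1)
    top = probs.topk(top_k)
    tokens = tokenizer.convert_ids_to_tokens(top.indices.tolist())
    return list(zip(tokens, top.values.tolist()))

print(substitution_candidates("the movie was absolutely wonderful".split(), 4))
```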
With the fast development of big data, it has become easier than before to learn the optimal decision rule by updating the decision rule recursively and making online decisions. We study the online statistical inference of model parameters in a contextual bandit framework of sequential decision-making. We propose a general framework for online and adaptive data collection environments that can update decision rules via weighted stochastic gradient descent. We allow different weighting schemes of the stochastic gradient and establish the asymptotic normality of the parameter estimator. Our proposed estimator significantly improves the asymptotic efficiency over the previous averaged SGD approach via inverse probability weights. We also conduct an optimality analysis on the weights in a linear regression setting. We provide a Bahadur representation of the proposed estimator and show that the remainder term in the Bahadur representation entails a slower convergence rate compared to classical SGD due to the adaptive data collection.
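As a hedged sketch of the kind of update the abstract describes (not the paper's exact estimator), each step below reweights the stochastic gradient of a squared loss and averages the iterates; the inverse-propensity weight, the polynomially decaying step size, and the toy data-generating process are illustrative assumptions, not the proposed weighting scheme.

```python
# Toy weighted SGD for a linear model, with Polyak-style iterate averaging.
import numpy as np

rng = np.random.default_rng(0)
d, theta_true = 5, np.ones(5)
theta = np.zeros(d)          # current SGD iterate
theta_bar = np.zeros(d)      # running average of iterates (the reported estimator)

def weighted_sgd_step(theta, x, y, weight, t, c0=0.5, alpha=0.501):
    """One weighted stochastic-gradient step on the loss 0.5*(x'theta - y)^2."""
    lr = c0 * t ** (-alpha)                  # assumed decaying step-size schedule
    grad = (x @ theta - y) * x
    return theta - lr * weight * grad

for t in range(1, 20001):
    x = rng.normal(size=d)                   # observed context (i.i.d. stand-in here)
    propensity = 0.5 + 0.4 * rng.random()    # stand-in for the logging probability
    y = x @ theta_true + rng.normal()        # observed outcome
    theta = weighted_sgd_step(theta, x, y, 1.0 / propensity, t)
    theta_bar += (theta - theta_bar) / t     # online averaging of the iterates

print(np.round(theta_bar, 3))                # should approach theta_true
```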
Denoising Diffusion Probabilistic Models (DDPMs) are emerging in text-to-speech (TTS) synthesis because of their strong capability of generating high-fidelity samples. However, their iterative refinement process in high-dimensional data space results in slow inference speed, which restricts their application in real-time systems. Previous works have explored speeding up inference by minimizing the number of inference steps, but at the cost of sample quality. In this work, to improve the inference speed of DDPM-based TTS models while achieving high sample quality, we propose ResGrad, a lightweight diffusion model which learns to refine the output spectrogram of an existing TTS model (e.g., FastSpeech 2) by predicting the residual between the model output and the corresponding ground-truth speech. ResGrad has several advantages: 1) Compared with other acceleration methods for DDPMs, which need to synthesize speech from scratch, ResGrad reduces the complexity of the task by changing the generation target from the ground-truth mel-spectrogram to the residual, resulting in a more lightweight model and thus a smaller real-time factor. 2) ResGrad is employed in the inference process of the existing TTS model in a plug-and-play way, without re-training this model. We verify ResGrad on the single-speaker dataset LJSpeech and on two more challenging datasets with multiple speakers (LibriTTS) and a high sampling rate (VCTK). Experimental results show that, in comparison with other speed-up methods for DDPMs: 1) ResGrad achieves better sample quality at the same inference speed measured by real-time factor; 2) with similar speech quality, ResGrad synthesizes speech more than 10 times faster than baseline methods. Audio samples are available at https://resgrad1.github.io/.
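A minimal sketch of the residual-refinement idea, assuming a generic epsilon-predicting denoiser, a linear beta schedule, and a stub network so the snippet runs end-to-end; none of these names or settings come from the released ResGrad code.

```python
# Sketch: refine an existing TTS model's coarse mel by adding a diffusion-sampled residual.
import torch

def ddpm_sample_residual(denoiser, condition, shape, num_steps=50):
    """Standard DDPM ancestral sampling of the residual, conditioned on the coarse mel."""
    betas = torch.linspace(1e-4, 0.05, num_steps)      # assumed noise schedule
    alphas = 1.0 - betas
    alpha_bar = torch.cumprod(alphas, dim=0)
    x = torch.randn(shape)
    for t in reversed(range(num_steps)):
        t_batch = torch.full((shape[0],), t, dtype=torch.long)
        eps = denoiser(x, t_batch, condition)          # predicted noise
        mean = (x - betas[t] / torch.sqrt(1.0 - alpha_bar[t]) * eps) / torch.sqrt(alphas[t])
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[t]) * noise
    return x

def resgrad_infer(coarse_mel, denoiser, num_steps=50):
    """Coarse TTS spectrogram + sampled residual = refined spectrogram."""
    residual = ddpm_sample_residual(denoiser, coarse_mel, coarse_mel.shape, num_steps)
    return coarse_mel + residual

# Stub denoiser just to exercise the loop; a real residual model is trained on
# (TTS output, ground-truth mel) pairs and conditioned on the coarse mel.
denoiser = lambda x, t, cond: torch.zeros_like(x)
coarse = torch.zeros(1, 80, 120)                       # (batch, n_mels, frames)
print(resgrad_infer(coarse, denoiser).shape)
```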
Federated learning has recently been applied to recommendation systems to protect user privacy. In federated learning settings, recommendation systems can train recommendation models by collecting only the intermediate parameters instead of the real user data, which greatly enhances user privacy. Besides, federated recommendation systems make it possible to collaborate with other data platforms to improve recommendation model performance while meeting regulatory and privacy constraints. However, federated recommendation systems face many new challenges such as privacy, security, heterogeneity, and communication costs. While significant research has been conducted in these areas, gaps in the survey literature still exist. In this survey, we (1) summarize some common privacy mechanisms used in federated recommendation systems and discuss the advantages and limitations of each mechanism; (2) review some robust aggregation strategies and several novel attacks against security; (3) summarize some approaches to address heterogeneity and communication cost problems; (4) introduce some open-source platforms that can be used to build federated recommendation systems; and (5) present some prospective research directions for the future. This survey can help researchers and practitioners understand the research progress in these areas.
Is it possible for a first-order method, i.e., one that uses only first derivatives, to be quadratically convergent? For univariate loss functions, the answer is yes -- the Steffensen method avoids second derivatives and is still quadratically convergent like Newton's method. By incorporating an optimal step size we can even push its convergence order beyond quadratic to $1+\sqrt{2} \approx 2.414$. While such high convergence orders are a pointless overkill for a deterministic algorithm, they become rewarding when the algorithm is randomized for problems of massive sizes, as randomization invariably compromises convergence speed. We introduce two adaptive learning rates inspired by the Steffensen method, intended for use in a stochastic optimization setting and requiring no hyperparameter tuning aside from batch size. Extensive experiments show that they compare favorably with several existing first-order methods. When restricted to a quadratic objective, our stochastic Steffensen methods reduce to the randomized Kaczmarz method -- note that this is not true for SGD or SLBFGS -- and thus we may also view our methods as a generalization of randomized Kaczmarz to arbitrary objectives.
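For reference, the classical deterministic, univariate Steffensen update applied to the loss gradient replaces Newton's second derivative with a divided difference of the first derivative taken at step size $f'(x)$ itself; the example loss below is an illustrative choice, and the stochastic, adaptive-learning-rate variants of the paper are not reproduced.

```python
# Steffensen iteration on the gradient of a univariate loss: only first derivatives used.
import math

def steffensen_minimize(grad, x0, tol=1e-12, max_iter=100):
    """Minimize a univariate loss given only its first derivative."""
    x = x0
    for _ in range(max_iter):
        g = grad(x)
        if abs(g) < tol:
            break
        denom = grad(x + g) - g      # approx. f''(x) * f'(x), since the step is g = f'(x)
        if denom == 0:
            break
        x -= g * g / denom           # matches the Newton step f'(x)/f''(x) to first order
    return x

grad = lambda x: 2.0 * (x - 3.0) + math.exp(x) / 100.0   # derivative of (x-3)^2 + e^x/100
x_star = steffensen_minimize(grad, x0=0.0)
print(x_star, grad(x_star))          # gradient at the returned point is ~0
```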
Video super-resolution is one of the most popular tasks on mobile devices, being widely used for automatic improvement of low-bitrate and low-resolution video streams. While numerous solutions have been proposed for this problem, they are usually quite computationally demanding, demonstrating low FPS rates and poor power efficiency on mobile devices. In this Mobile AI challenge, we address this problem and ask the participants to design an end-to-end real-time video super-resolution solution for mobile NPUs optimized for low energy consumption. The participants were provided with the REDS training dataset containing video sequences for a 4X video upscaling task. The runtime and power efficiency of all models were evaluated on the powerful MediaTek Dimensity 9000 platform with a dedicated AI processing unit capable of accelerating floating-point and quantized neural networks. All proposed solutions are fully compatible with the above NPU, demonstrating up to a 500 FPS rate and 0.2 [Watt / 30 FPS] power consumption. A detailed description of all models developed in the challenge is provided in this paper.
Tracking multiple athletes in sports videos is a very challenging multi-object tracking (MOT) task, since athletes often share the same appearance and stay close to one another, which turns the common occlusion problem into a troublesome duplicate-detection problem. In this paper, duplicate detection is newly and precisely defined as occlusion misreporting on the same athlete by multiple detection boxes in one frame. To address this problem, we carefully design a novel transformer-based duplicate detector (D$^3$) for training, together with a specific matching algorithm, Rally-Hungarian (RH). Once duplicate detection occurs, D$^3$ immediately corrects the procedure by generating an augmented-box loss. RH, which is triggered by the substitution rules of team sports, is extremely suitable for sports videos. In addition, to complement tracking datasets without shot changes, we release a new dataset based on sports videos, named RallyTrack. Extensive experiments on RallyTrack show that combining D$^3$ and RH can substantially improve tracking performance, by 9.2 in MOTA and 4.5 in HOTA. Meanwhile, experiments on the MOT series and DanceTrack show that D$^3$ can accelerate convergence during training, saving up to 80% of the original training time on MOT17 in particular. Finally, our model, trained only on volleyball videos, can be directly applied to basketball and soccer videos for multi-athlete tracking (MAT), which demonstrates the generality of our method. Our dataset is available at https://github.com/heruihr/rallytrack.
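For context, the standard Hungarian assignment that matching schemes such as RH build on associates existing tracks with per-frame detections over a 1 - IoU cost matrix; the sketch below shows only that baseline step, with an assumed IoU threshold, and does not reproduce the sport-specific substitution-rule trigger described in the abstract.

```python
# Baseline track-to-detection matching with the Hungarian algorithm.
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(box_a, box_b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def match_tracks_to_detections(track_boxes, det_boxes, iou_threshold=0.3):
    """Hungarian assignment on a 1 - IoU cost matrix; pairs with too little overlap
    are discarded (this is where a sport-specific rule could hook in)."""
    cost = np.array([[1.0 - iou(t, d) for d in det_boxes] for t in track_boxes])
    rows, cols = linear_sum_assignment(cost)
    return [(r, c) for r, c in zip(rows, cols) if 1.0 - cost[r, c] >= iou_threshold]

tracks = [[0, 0, 10, 10], [20, 20, 30, 30]]
dets = [[21, 19, 31, 29], [1, 1, 11, 11]]
print(match_tracks_to_detections(tracks, dets))   # expect [(0, 1), (1, 0)]
```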
Self-training has shown great potential in semi-supervised learning. Its core idea is to use a model learned on labeled data to generate pseudo-labels for unlabeled samples, and then teach itself with them. To obtain valid supervision, current efforts usually adopt a momentum teacher for pseudo-label prediction, but observe the confirmation-bias problem, where incorrect predictions may provide wrong supervision signals and accumulate during training. The main cause of this drawback is that the prevailing self-training framework acts as guiding the current state with previous knowledge, because the teacher is updated only with past students. To alleviate this problem, we propose a novel self-training strategy that allows the model to learn from the future. Concretely, at each training step, we first virtually optimize the student (i.e., cache the gradients without applying them to the model weights), then update the teacher with the virtual future student, and finally ask the teacher to produce pseudo-labels for the current student as guidance. In this way, we manage to improve the quality of the pseudo-labels and thus improve performance. We also develop two variants of our future self-training (FST) framework by peering into the future deeply (FST-D) and widely (FST-W). Taking unsupervised domain adaptive semantic segmentation and semi-supervised semantic segmentation as example tasks, we experimentally demonstrate the effectiveness and superiority of our approach under a wide range of settings. Code will be made publicly available.
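A minimal PyTorch-style sketch of the training step the abstract outlines, assuming linear classifiers, a plain-SGD virtual step, an EMA teacher rate, and a confidence threshold that are all illustrative placeholders rather than the authors' code:

```python
# Sketch of a "future self-training" style step: virtual student -> teacher EMA -> pseudo-labels.
import copy
import torch

def fst_step(student, teacher, opt, labeled, unlabeled, ema=0.999, pseudo_thresh=0.9):
    x_l, y_l = labeled
    x_u = unlabeled

    # 1) Virtual student update: compute a gradient step without committing it.
    sup_loss = torch.nn.functional.cross_entropy(student(x_l), y_l)
    grads = torch.autograd.grad(sup_loss, list(student.parameters()), retain_graph=True)
    virtual = copy.deepcopy(student)
    lr = opt.param_groups[0]["lr"]
    with torch.no_grad():
        for p, g in zip(virtual.parameters(), grads):
            p -= lr * g                                    # mimics one plain SGD step

    # 2) Teacher EMA update using the virtual (future) student.
    with torch.no_grad():
        for pt, pv in zip(teacher.parameters(), virtual.parameters()):
            pt.mul_(ema).add_(pv, alpha=1.0 - ema)

    # 3) Pseudo-labels from the updated teacher supervise the *current* student.
    with torch.no_grad():
        probs = teacher(x_u).softmax(dim=-1)
        conf, pseudo = probs.max(dim=-1)
        mask = conf >= pseudo_thresh
    unsup_loss = (torch.nn.functional.cross_entropy(student(x_u)[mask], pseudo[mask])
                  if mask.any() else sup_loss * 0.0)

    opt.zero_grad()
    (sup_loss + unsup_loss).backward()
    opt.step()
    return sup_loss.item(), unsup_loss.item()

# Toy usage with linear classifiers and random data, just to exercise the step.
student = torch.nn.Linear(8, 3)
teacher = copy.deepcopy(student)
opt = torch.optim.SGD(student.parameters(), lr=0.1)
labeled = (torch.randn(16, 8), torch.randint(0, 3, (16,)))
unlabeled = torch.randn(32, 8)
print(fst_step(student, teacher, opt, labeled, unlabeled))
```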
This work extends previous advances in genetic fingerprint spoofing and introduces Diversity and Novelty MasterPrints. The system uses a quality-diversity evolutionary algorithm to generate dictionaries of artificial prints, with a focus on increasing the coverage of the users in the dataset. Diversity MasterPrints focus on generating solution prints that match users not covered by previously found prints, while Novelty MasterPrints explicitly search for prints that are more distant from previous prints in user space. Our multi-print search methods outperform the singular DeepMasterPrints in both coverage and generalization, while maintaining the quality of the fingerprint image output.
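A toy sketch of the coverage-driven dictionary-building idea, assuming a placeholder print representation, a simple (1+1)-style mutation loop, and a fake matcher; the real system evolves inputs to a fingerprint generator against an actual matcher, which is not reproduced here.

```python
# Toy coverage-driven evolutionary search: each new "print" is rewarded only for users
# that the current dictionary does not yet cover.
import random

def evolve_print_dictionary(match_fn, dict_size=5, gens=200, dim=32):
    covered, dictionary = set(), []
    for _ in range(dict_size):
        best = [random.uniform(-1, 1) for _ in range(dim)]
        best_gain = len(match_fn(best) - covered)
        for _ in range(gens):                              # simple hill-climbing mutation loop
            mutant = [x + random.gauss(0.0, 0.1) for x in best]
            gain = len(match_fn(mutant) - covered)         # count of newly covered users
            if gain > best_gain:
                best, best_gain = mutant, gain
        dictionary.append(best)
        covered |= match_fn(best)
    return dictionary, covered

def fake_match(print_vec, num_users=100):
    """Placeholder matcher: deterministically maps a print to a small set of 'users'."""
    local = random.Random(round(sum(print_vec), 3))
    return {u for u in range(num_users) if local.random() < 0.05}

prints, covered = evolve_print_dictionary(fake_match)
print(len(prints), "prints cover", len(covered), "of 100 users")
```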
Behavior prediction plays an important role in integrated autonomous driving software solutions. Within behavior prediction research, interactive behavior prediction is a less-studied area compared with single-agent behavior prediction. Predicting the motion of interacting agents requires new mechanisms to capture the joint behavior of the interacting pair. In this work, we formulate the end-to-end joint prediction problem as a sequential learning process of marginal learning and joint learning of vehicle behavior. We propose ProspectNet, a joint-learning block that adopts weighted attention scores to model the mutual influence between interacting agent pairs. The joint-learning block first weighs the candidate trajectories from the multi-modal marginal predictions, and then updates the ego agent's embedding via cross-attention. In addition, we feed each interactive agent's individual future predictions into a pairwise scoring module to select the top-$K$ prediction pairs. We show that ProspectNet outperforms the Cartesian product of two marginal predictions and achieves comparable performance on the Waymo Interactive Motion Prediction benchmark.
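A schematic of the interaction-modeling step only, assuming embedding dimensions, a single attention layer, and a residual update that are illustrative choices, not the released ProspectNet architecture: candidate trajectories of the other agent are weighted by their marginal confidence and then attended to by the ego agent's embedding.

```python
# Toy joint-learning block: confidence-weighted candidates + cross-attention on the ego embedding.
import torch
import torch.nn as nn

class PairwiseInteractionBlock(nn.Module):
    def __init__(self, embed_dim=128, num_heads=4):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(embed_dim)

    def forward(self, ego_embed, other_candidates, candidate_scores):
        """
        ego_embed:        (batch, 1, d) embedding of the ego agent
        other_candidates: (batch, K, d) embeddings of K candidate trajectories of the other agent
        candidate_scores: (batch, K)    confidence of each marginal candidate
        """
        # Weigh the candidates by their marginal confidence before attending to them.
        weighted = other_candidates * candidate_scores.softmax(dim=-1).unsqueeze(-1)
        attended, _ = self.cross_attn(query=ego_embed, key=weighted, value=weighted)
        return self.norm(ego_embed + attended)     # residual update of the ego embedding

block = PairwiseInteractionBlock()
ego, cands, scores = torch.randn(2, 1, 128), torch.randn(2, 6, 128), torch.randn(2, 6)
print(block(ego, cands, scores).shape)             # torch.Size([2, 1, 128])
```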